Current Issue: October-December, Volume 2025, Issue 4 (5 articles)
In the mechanical harvesting process, pineapple fruits are prone to damage. Traditional detection methods struggle to quantitatively assess pineapple damage and often operate at slow speeds. To address these challenges, this paper proposes a pineapple mechanical damage detection method based on machine vision, which segments the damaged region and calculates its area using multiple image processing algorithms. First, color and depth images of the damaged pineapple are captured with a RealSense depth camera and their pixel information is aligned. Preprocessing steps such as grayscale conversion, contrast enhancement, and Gaussian denoising are then applied to the color images to produce grayscale images with prominent damage features. Next, an image segmentation method combining thresholding, edge detection, and morphological processing extracts damage contours with smoother boundaries. After contour filling and removal of small connected regions, a binary image of the damaged area is generated. Finally, a calibration object with a known surface area is used to relate depth values to pixel area; by combining the depth information with the pixel area of the binary image, the damaged area of the pineapple is calculated. The damage detection system was implemented in MATLAB, and experimental results showed that, compared with the actual measured damaged area, the proposed method achieved an average error of 5.67% and an area calculation accuracy of 94.33%, even under conditions of minimal skin color difference and low image resolution. Compared with traditional manual detection, this approach increases detection speed by more than 30 times....
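The abstract's segmentation and depth-based area-calibration steps can be illustrated in a few lines of Python. This is a minimal sketch, not the authors' MATLAB implementation: Otsu thresholding stands in for the paper's combined segmentation pipeline, and the calibration assumes a pinhole-style model in which a pixel's physical footprint grows with the square of the measured depth. All function names and calibration values are illustrative.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold that maximizes the
    between-class variance of the grayscale histogram."""
    prob = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob /= prob.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = prob[:t].sum()
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def damaged_area_mm2(mask, depth_mm, ref_area_mm2, ref_pixels, ref_depth_mm):
    """Convert a pixel count to physical area using a calibration
    object of known area: a pixel's footprint scales with depth squared."""
    mm2_per_pixel = (ref_area_mm2 / ref_pixels) * (depth_mm / ref_depth_mm) ** 2
    return mask.sum() * mm2_per_pixel
```

For example, if a 100 mm² calibration object covers 100 pixels at 400 mm, a damaged region of 16 pixels at the same depth maps to 16 mm², and the same 16 pixels seen at 800 mm would map to 64 mm².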
The removal of pineapple eyes is a crucial step in pineapple processing. However, their irregularly distributed spiral arrangement poses a dual challenge for positioning accuracy and automated removal by the end-effector. To solve this problem, a pineapple eye removal device based on machine vision was designed. The device comprises a clamping mechanism, an eye removal end-effector, an XZ two-axis sliding table, a depth camera, and a control system. Taking the eye removal time and rotational angular velocity as variables, the relationship between the rod length of the prime mover and the contact force and gear torque during eye removal was simulated and analyzed using ADAMS (2020) software. Based on these simulations, the optimal prime-mover length for the end-effector was determined to be 23.00 mm. The performance of various YOLOv5 models was compared in terms of accuracy, recall, mean detection error, and detection time. The YOLOv5s model was chosen for real-time pineapple eye detection, and each eye's position was determined through coordinate transformation. The control system then actuated the XZ two-axis sliding table to position the eye removal end-effector for effective removal. The results indicated an average complete removal rate of 88.5%, an incomplete removal rate of 6.6%, a missed detection rate of 4.9%, and an average removal time of 156.7 s per pineapple. Compared with existing solutions, this study optimized the end-effector design for pineapple eye removal, captured depth information with a depth camera, and combined machine vision with three-dimensional localization, improving removal accuracy and production efficiency....
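The coordinate transformation that converts a detected eye's pixel position and measured depth into a camera-frame target for the sliding table can be sketched with the standard pinhole back-projection. The intrinsic parameters below (focal lengths fx, fy in pixels and principal point cx, cy) are illustrative values, not those of the paper's camera.

```python
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a measured depth into camera-frame
    coordinates via the pinhole model; (fx, fy) are focal lengths in
    pixels and (cx, cy) is the principal point."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return X, Y, depth
```

A pixel at the principal point maps to (0, 0, depth); a pixel 100 columns to the right at 500 mm depth, with fx = 500, maps to X = 100 mm. The control system would then translate such camera-frame coordinates into XZ slide positions via the hand-eye calibration.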
With the continuous advancement of industrialization and intelligentization, stereo-vision-based measurement technology for large-scale components has become a prominent research focus. To address weak-textured regions in large-scale component images and reduce mismatches in stereo matching, we propose a cross-scale multi-feature stereo matching algorithm. In the cost-computation stage, the sum of absolute differences (SAD), the census transform, and a modified census transform are employed as matching-cost measures. During the cost-aggregation phase, cross-scale theory is introduced to fuse multi-scale cost volumes with distinct aggregation parameters through a cross-scale framework. Experimental results on both benchmark and real-world datasets demonstrate that the enhanced algorithm achieves an average mismatch rate of 12.25%, exhibiting superior robustness compared with conventional census transform and semi-global matching (SGM) algorithms....
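One of the cost measures the abstract combines, the census transform with a Hamming-distance cost, can be sketched as follows. This is a generic 3x3 census, not the paper's modified variant; `np.roll` wraps at the image borders, which a production implementation would handle by padding instead.

```python
import numpy as np

def census_transform(img):
    """3x3 census transform: each pixel becomes an 8-bit code recording
    which of its eight neighbours are darker than the centre pixel.
    Note: np.roll wraps around at image borders (sketch only)."""
    codes = np.zeros(img.shape, dtype=np.uint16)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
            codes = (codes << 1) | (neighbour < img).astype(np.uint16)
    return codes

def hamming_cost(c1, c2):
    """Per-pixel matching cost between two census images:
    the Hamming distance of their bit codes."""
    x = np.bitwise_xor(c1, c2)
    cost = np.zeros_like(x)
    while np.any(x):
        cost += x & 1
        x = x >> 1
    return cost
```

Because the census code depends only on intensity ordering rather than absolute values, the Hamming cost is robust to radiometric differences between the two views, which is why it is commonly blended with SAD in weakly textured regions.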
This study explores the application of a machine vision system integrated with a convolutional neural network (CNN) for detecting and classifying welding defects. By leveraging deep learning, the proposed approach aims to enhance the efficiency and reliability of defect classification. This method not only reduces human dependency but also establishes a framework for automated welding quality control systems. A CNN-based machine vision system was developed to classify welding defects in radiographic images. In particular, two transfer learning models, namely ResNet-18 and ResNet-50, were applied and evaluated to determine the most effective method for detecting and classifying weld defects. The dataset covered three classes of weld defects: cracks, lack of penetration, and porosity. The performance of each ResNet-based CNN model was assessed using standard evaluation metrics and visualization techniques. ResNet-50 emerged as the best-performing model, showing the strongest response in the weld defect regions and achieving an average accuracy of 96.061%. This model proved effective in detecting and classifying defects, demonstrating its potential to significantly enhance the reliability and automation of defect detection and recognition....
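The evaluation step can be illustrated independently of the ResNet models themselves. Below is a minimal sketch of computing overall accuracy plus per-class precision and recall from a confusion matrix for the three defect classes (cracks, lack of penetration, porosity); it is a generic metric implementation, not the study's evaluation code.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index the true class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def classification_metrics(cm):
    """Overall accuracy plus per-class precision and recall."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)   # correct / predicted-as-class
    recall = tp / cm.sum(axis=1)      # correct / actually-in-class
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall
```

For instance, with true labels [0, 0, 1, 1, 2, 2] and predictions [0, 0, 1, 2, 2, 2], accuracy is 5/6, class 2 precision is 2/3, and class 1 recall is 0.5.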
To ensure the effective implementation of food waste reduction in college cafeterias, Capital Normal University developed an automatic plate recognition system based on machine vision technology. The system operates by obtaining images of plates (whether clean or not) and the diners’ faces through multi-directional monitoring, then employs several deep learning models for the automatic localization and identification of the plates. Face recognition technology links the identification results of the plates to the diners. Additionally, the system incorporates innovative educational mechanisms such as online feedback and point redemption to encourage student participation and foster thrifty habits. These initiatives also provide more accurate training samples, enhancing the system’s precision and stability. Our findings indicate that machine vision technology is suitable for rapid identification and localization of clean plates. Even without optimized network parameters, the U-Net network demonstrates high recognition accuracy (MIOU of 68.64% and MPA of 78.21%) and ideal convergence speed. Pilot data showed a 13% reduction in overall waste in the cafeteria and over 75% user acceptance of the mechanism. The implementation of this system has significantly improved the efficiency and accuracy of plate recognition, offering an effective solution for food waste prevention in college canteens....
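The reported segmentation metrics, mean intersection-over-union (MIOU) and mean pixel accuracy (MPA), can be computed from predicted and ground-truth label maps as in the generic sketch below (not the project's own evaluation code).

```python
import numpy as np

def miou_mpa(pred, gt, n_classes):
    """Mean IoU and mean pixel accuracy, averaged over classes that
    actually occur in the prediction or the ground truth."""
    ious, accs = [], []
    for c in range(n_classes):
        p, g = pred == c, gt == c
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        if union > 0:
            ious.append(inter / union)   # IoU for class c
        if g.sum() > 0:
            accs.append(inter / g.sum())  # pixel accuracy for class c
    return float(np.mean(ious)), float(np.mean(accs))
```

As a toy check, a prediction that mislabels one of two background pixels in a 2x2 image yields an MIOU of (0.5 + 2/3)/2 and an MPA of 0.75.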